Results 1 - 4 of 4
1.
JMIR Mhealth Uhealth ; 10(9): e38364, 2022 09 19.
Article in English | MEDLINE | ID: covidwho-2054780

ABSTRACT

BACKGROUND: Symptom checkers are clinical decision support apps for patients, used by tens of millions of people annually. They are designed to provide diagnostic and triage advice and to help users seek the appropriate level of care. Little evidence is available on their diagnostic and triage accuracy when used directly by patients for urgent conditions.

OBJECTIVE: The aim of this study was to determine the diagnostic and triage accuracy and usability of a symptom checker used by patients presenting to an emergency department (ED).

METHODS: We recruited a convenience sample of English-speaking patients presenting for care in an urban ED. Each consenting patient used a leading symptom checker from Ada Health before the ED evaluation. Diagnostic accuracy was evaluated by comparing the symptom checker's diagnoses, and those of 3 independent emergency physicians viewing the patient-entered symptom data, with the final diagnoses from the ED evaluation. The Ada diagnoses and triage recommendations were also critiqued by the independent physicians. Patients completed a usability survey based on the Technology Acceptance Model.

RESULTS: A total of 40 (80%) of the 50 participants approached completed the symptom checker assessment and usability survey. Their mean age was 39.3 (SD 15.9; range 18-76) years, and they were 65% (26/40) female, 68% (27/40) White, 48% (19/40) Hispanic or Latino, and 13% (5/40) Black or African American. Some cases had missing data or lacked a clear ED diagnosis; 75% (30/40) were included in the diagnostic analysis and 93% (37/40) in the triage analysis. The sensitivity of Ada for at least one of the final ED diagnoses (based on its top 5 diagnoses) was 70% (95% CI 54%-86%), close to the 3 physicians' mean sensitivity of 68.9% (based on their top 3 diagnoses). The physicians rated the Ada triage decisions as fully agree in 62% (23/37) of cases and as safe but too cautious in 24% (9/37). Triage was rated as unsafe and too risky in 22% (8/37) of cases by at least one physician, in 14% (5/37) by at least two physicians, and in 5% (2/37) by all 3 physicians. Usability was rated highly; participants agreed or strongly agreed with the 7 Technology Acceptance Model usability questions, with a mean score of 84.6%, although "satisfaction" and "enjoyment" were rated low.

CONCLUSIONS: This study provides preliminary evidence that a symptom checker can offer acceptable usability and diagnostic accuracy for patients with various urgent conditions. In total, 14% (5/37) of the symptom checker's triage recommendations were deemed unsafe and too risky by at least two physicians based on the symptoms recorded, similar to results from studies of telephone and nurse triage. Larger studies of diagnostic and triage performance with direct patient use in different clinical environments are needed.
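The reported sensitivity and its confidence interval can be reproduced from the figures in the abstract. The sketch below assumes 21 hits among the 30 analysable cases (70%) and a normal-approximation (Wald) binomial interval; the paper does not state which interval method was used, but this assumption reproduces the reported 95% CI of 54%-86%.

    import math

    # Hedged sketch: reproduce the reported top-5 sensitivity and 95% CI.
    # 21/30 is inferred from the 70% sensitivity over 30 analysable cases;
    # the Wald normal approximation is an assumption, not the paper's stated method.
    hits, n = 21, 30
    p_hat = hits / n                               # 0.70
    se = math.sqrt(p_hat * (1 - p_hat) / n)        # standard error of a proportion
    lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
    print(f"sensitivity {p_hat:.0%}, 95% CI {lower:.0%}-{upper:.0%}")  # 70%, 54%-86%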


Subject(s)
Decision Support Systems, Clinical , Emergency Service, Hospital , Physicians , Adolescent , Adult , Aged , Emergency Service, Hospital/organization & administration , Female , Humans , Middle Aged , Surveys and Questionnaires , Triage/methods , Young Adult
2.
Yearb Med Inform ; 31(1): 67-73, 2022 Aug.
Article in English | MEDLINE | ID: covidwho-1873590

ABSTRACT

OBJECTIVE: To assess the impact of open source projects on making healthcare systems more resilient, accessible and equitable.

METHODS: In response to the International Medical Informatics Association (IMIA) call for working group contributions to the IMIA Yearbook, the Open Source Working Group (OSWG) conducted a rapid review of current open source digital health projects to illustrate how they can contribute to making healthcare systems more resilient, accessible and equitable. We sought case studies from the OSWG membership to illustrate these three concepts and how open source software (OSS) addresses them in the real world. These case studies are discussed against the background of the literature identified through the rapid review.

RESULTS: To illustrate the concept of resilience, we present case studies on the adoption of District Health Information Software version 2 (DHIS2) for managing the COVID-19 pandemic in Rwanda and on the adoption of the openEHR open health IT standard. To illustrate accessibility, we show how open source design systems for user interface design have been used by governments to ensure the accessibility of digital health services for patients and healthy individuals, and by the OpenMRS community to standardise their user interface design. Finally, to illustrate the concept of equity, we describe the OpenWHO framework and two open source digital health projects, GNU Health and openIMIS, that both aim to reduce health inequities through the use of open source digital health software.

CONCLUSION: This review demonstrates that open source software addresses many of the challenges involved in making healthcare more accessible, equitable and resilient in both high- and low-income settings.


Subject(s)
COVID-19 , Medical Informatics , Humans , Software , Delivery of Health Care , Pandemics
3.
Yearb Med Inform ; 30(1): 38-43, 2021 Aug.
Article in English | MEDLINE | ID: covidwho-1196878

ABSTRACT

OBJECTIVES: The emerging COVID-19 pandemic has caused one of the world's worst health disasters, compounded by social confusion over misinformation, the so-called "infodemic". In this paper, we discuss how open technology approaches - including data sharing, visualization, and tooling - can address the COVID-19 pandemic and infodemic.

METHODS: In response to the call for participation in the 2020 International Medical Informatics Association (IMIA) Yearbook theme issue on Medical Informatics and the Pandemic, the IMIA Open Source Working Group surveyed recent work on the use of Free/Libre/Open Source Software (FLOSS) in this pandemic.

RESULTS: FLOSS healthcare projects, including GNU Health, OpenMRS, DHIS2, and others, have responded from the early phase of the pandemic. Data related to COVID-19 have been published by health organizations all over the world. Civic technology and the collaborative work of FLOSS and open data groups were considered to support collective intelligence on approaches to managing the pandemic.

CONCLUSION: FLOSS and open data have been used effectively to help manage the COVID-19 pandemic, and open approaches to collaboration can improve trust in data.
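As a small, hypothetical illustration of the open-data sharing and visualization tooling discussed above, the sketch below aggregates and plots a daily case series with the open source pandas and Matplotlib libraries. The file name covid_cases.csv and its columns (date, new_cases) are assumptions for this example, not a dataset referenced in the paper; any openly published daily case series with those fields would work.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical open dataset: "covid_cases.csv" with columns date, new_cases.
    cases = pd.read_csv("covid_cases.csv", parse_dates=["date"])

    # Aggregate the daily series to weekly totals and plot it.
    weekly = cases.set_index("date")["new_cases"].resample("W").sum()
    weekly.plot(title="Weekly reported COVID-19 cases (open data)")
    plt.ylabel("cases per week")
    plt.tight_layout()
    plt.savefig("weekly_cases.png")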


Subject(s)
COVID-19 , Information Dissemination , Software , Access to Information , Health Information Exchange , Humans
4.
BMJ Open ; 10(12): e040269, 2020 12 16.
Article in English | MEDLINE | ID: covidwho-979613

ABSTRACT

OBJECTIVES: To compare the breadth of condition coverage, accuracy of suggested conditions and appropriateness of urgency advice of eight popular symptom assessment apps.

DESIGN: Vignettes study.

SETTING: 200 primary care vignettes.

INTERVENTION/COMPARATOR: For eight apps and seven general practitioners (GPs): breadth of coverage and accuracy of condition suggestions and urgency advice, measured against the vignettes' gold standard.

PRIMARY OUTCOME MEASURES: (1) Proportion of conditions 'covered' by an app, that is, not excluded because the user was too young/old or pregnant, or because the condition was not modelled; (2) proportion of vignettes with the correct primary diagnosis among the top 3 conditions suggested; (3) proportion of 'safe' urgency advice (ie, at the gold standard level, more conservative, or no more than one level less conservative).

RESULTS: Condition-suggestion coverage was highly variable, with some apps not offering a suggestion for many users: in alphabetical order, Ada: 99.0%; Babylon: 51.5%; Buoy: 88.5%; K Health: 74.5%; Mediktor: 80.5%; Symptomate: 61.5%; Your.MD: 64.5%; WebMD: 93.0%. Top-3 suggestion accuracy was: GPs (average): 82.1%±5.2%; Ada: 70.5%; Babylon: 32.0%; Buoy: 43.0%; K Health: 36.0%; Mediktor: 36.0%; Symptomate: 27.5%; WebMD: 35.5%; Your.MD: 23.5%. Some apps excluded certain user demographics or conditions, and their performance was generally higher when the corresponding vignettes were excluded. For safe urgency advice, the tested GPs had an average of 97.0%±2.5%. Among the vignettes for which advice was provided, only three apps had safety performance within 1 SD of the GPs - Ada: 97.0%; Babylon: 95.1%; Symptomate: 97.8%. One app had safety performance within 2 SDs of the GPs - Your.MD: 92.6%. Three apps had safety performance outside 2 SDs of the GPs - Buoy: 80.0% (p<0.001); K Health: 81.3% (p<0.001); Mediktor: 87.3% (p=1.3×10⁻³).

CONCLUSIONS: The utility of digital symptom assessment apps relies on coverage, accuracy and safety. While no digital tool outperformed the GPs, some came close, and the iterative nature of software improvement offers scalable improvements to care.
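The within-1-SD and within-2-SD groupings reported above can be reproduced from the quoted figures. The sketch below uses the reported GP mean (97.0%) and SD (2.5%) together with the reported per-app safety percentages; the banding logic is only an illustration of the comparison described in the abstract, not the study's own significance test (the quoted p-values come from that separate test).

    # Hedged sketch: band each app's safe-urgency-advice rate relative to the
    # GPs' reported mean (97.0%) and SD (2.5%), as described in the abstract.
    gp_mean, gp_sd = 97.0, 2.5
    apps = {
        "Ada": 97.0, "Babylon": 95.1, "Symptomate": 97.8, "Your.MD": 92.6,
        "Buoy": 80.0, "K Health": 81.3, "Mediktor": 87.3,
    }
    for name, pct in apps.items():
        sds_from_gp_mean = abs(pct - gp_mean) / gp_sd
        if sds_from_gp_mean <= 1:
            band = "within 1 SD of the GPs"
        elif sds_from_gp_mean <= 2:
            band = "within 2 SDs of the GPs"
        else:
            band = "outside 2 SDs of the GPs"
        print(f"{name}: {pct:.1f}% ({band})")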


Subject(s)
General Practitioners , Humans , Mobile Applications , Primary Health Care , Symptom Assessment